Peer-Review Record

The Detection and Characterization of Arctic Sea Ice Leads with Satellite Imagers

Remote Sens. 2019, 11(5), 521; https://doi.org/10.3390/rs11050521
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Received: 29 January 2019 / Revised: 15 February 2019 / Accepted: 27 February 2019 / Published: 4 March 2019
(This article belongs to the Special Issue Satellite Derived Global Ocean Product Validation/Evaluation)

Round 1

Reviewer 1 Report

My comments have been mainly addressed. The manuscript should be proofread. There is an unfinished sentence on lines 383-384. I still feel the quality of the figures should be improved, but will leave that to the journal. It does not reduce the scientific merit of the study.

Author Response

1.       There is an unfinished sentence on lines 383-384.

This is an editorial error that only appears in the PDF version; it does not appear in the Word version of the article. We will make sure it is compiled correctly in the final submission.

 

2.       I still feel the quality of the figures should be improved, but will leave that to the journal. It does not reduce the scientific merit of the study.

We feel that all of the figures are of high quality; we will leave this decision to the journal.


Reviewer 2 Report

Dear editorial board members of RS and authors of the manuscript remotesensing-445574,

The authors have replied to all the comments and taken the comments into account where possible with reasonable work. The manuscript is quite ready for publication now. I'd like to thank the authors for their good work.

Just a couple of very minor comments to be considered by the authors:

1) I commented the sentences on the great circle, P12 L285-P13 L287 in the revised manuscript. I did not quite understand the sentences and I think it is still a bit difficult to understand; it would be better to change the sentences e.g. to "The great circle distance and azimuth angle are calculated BETWEEN the start and end points. The segment width is another derived characteristic, found by dividing the segment area by THE great circle DISTANCE." (or possibly "great circle segment length"?).

At least I understand the "great circle length" to be the length of the whole great circle, not just the length of a segment of the great circle. This was the original reason for my earlier comment.

2) P23 L432: "...find leads in on the order..." I guess you mean "in the order", not "in on the order".

Sincerely,


Author Response

1.       I commented the sentences on the great circle, P12 L285-P13 L287 in the revised manuscript. I did not quite understand the sentences and I think it is still a bit difficult to understand, it would be better to change the sentences e.g. to "The great circle distance and azimuth angle are calculated BETWEEN the start and end points. The segment width is another derived characteristic, found by dividing the segment area by THE great circle DISTANCE." (or possibly "great circle segment length"?).

 

At least I understand the "great circle length" to be the length of the whole great circle, not just the length of a segment of the great circle. This was the original reason for my earlier comment.

Recommended word changes have been made.
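For illustration, the revised computation can be sketched in Python. This is a hypothetical helper, not the authors' code; the function names and the spherical-Earth radius of 6371 km are assumptions.

```python
import math

def great_circle_distance_azimuth(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance (km) and initial azimuth (deg, clockwise from
    north) between a segment's start and end points, via the haversine
    formula and the standard initial-bearing formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin((p2 - p1) / 2.0) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2.0) ** 2)
    distance = 2.0 * radius_km * math.asin(math.sqrt(a))
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    azimuth = math.degrees(math.atan2(y, x)) % 360.0
    return distance, azimuth

def segment_width(segment_area_km2, lat1, lon1, lat2, lon2):
    """Derived segment width: area divided by the great-circle distance
    between the start and end points, as suggested in the comment."""
    distance, _ = great_circle_distance_azimuth(lat1, lon1, lat2, lon2)
    return segment_area_km2 / distance
```

For example, between (0°N, 0°E) and (0°N, 90°E) the distance is a quarter of the equatorial circumference and the azimuth is due east (90°).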

 

2.       P23 L432: "...find leads in on the order..." I guess You mean "in the order" not "in on the order".

Changed “on the order of” to “approximately”.


Reviewer 3 Report

see attached file

Comments for author File: Comments.pdf

Author Response

1.       In Section 3.3 why did you not compare pixel-wise yours and Willmes and Heinemann’s leads?

We did do a pixel-wise comparison (Figs. 11 and 12). This is stressed in the revised manuscript. Our main interest is in the year-by-year differences shown in Table 2.

 

2.       “Comparing the statistics with and without the product codependency, the results are similar.” What does this mean? Both lead products give roughly equal daily/yearly lead fractions over the Arctic?

We changed the sentence to “Comparing the statistics with and without the overlapping coverage of both products, the results are similar.”

Both daily and yearly lead fractions are similar; for brevity, we showed only a table with yearly results. This is also why we did not report the statistics from the comparisons in Figs. 11 and 12: the numbers are similar to what is shown in Table 2. This discussion has been added to the revised manuscript.

 

3.       In Section 3.3, Figure 17, do you have yet any explanations for the observed distributions?

Some discussion has been added.

 

4.        “Willmes and Heinemann ([16] and [17]) does not extend as far south and we limit the view angle. Cloud coverage results in some differences as well. We apply a cloud screening technique that allows for some lead retrievals in areas where the ice surface temperature is not retrieved, and therefore Willmes and Heinemann ([16] and [17]) cannot process a location for leads.”

 

I don’t think you mention earlier explicitly that Willmes and Heinemann used the MODIS ice surface temperature product for their lead detection.

 

On line 149, we point out an algorithmic difference: we use the 11 μm BT instead of a derived surface temperature product. More discussion of the difference appears in Section 5.

 

5.       In the Discussion Section you explain why only data for Jan-Apr is used, which is acceptable, but a reader may wonder in Section 3.3 why only data for these months are used. You could explain the issue already in Section 3.3.

We believe this is a topic better left for the discussion section. Text has been added to refer the reader to the discussion section when the dataset is introduced in section 3.3.

6.       Can you estimate at which air temperatures your lead detection starts to have too large errors?

The 11 micron window channel is nearly insensitive to atmospheric temperature. It is warmer surface temperatures that would cause problems detecting leads – when the ice surface temperature is nearly the same as the water surface temperature. Some discussion has been added:

“We use Level 1B 11 μm brightness temperatures [28,29] rather than the MODIS MxD29 ice surface temperature product [42-44]. The MODIS noise-equivalent temperature difference is 0.05 K at the 11 μm channel, which is sufficiently accurate for lead detection. There is, of course, a strong correlation between ice surface temperature and the 11 μm brightness temperature. Also, the actual surface temperature is less important than the contrast in temperature. Leads become undetectable when the thermal contrast between leads and the surrounding ice pack is small (e.g. less than 1.5 K, although the local spatial variability is also a factor, as described earlier). The primary cause of thermal contrast in the Arctic winter would be leads – the contrast between solid ice and either open water, ice and water mixed, or thin ice. In warmer seasons the contrast between ice temperature and water temperature becomes smaller. Detection capability decreases as the surface temperature increases. In summary, the temperature contrast becomes small as ice becomes thicker within a lead or when the surface temperature approaches the melting point of ice in the case of an unfrozen lead.”

 

7.       Do you think cloud data from atmospheric reanalyses could be used to improve MODIS data cloud masking?

Our goal is to produce a product that could be generated in real time; therefore we have not considered the use of reanalysis products. Setting our own work aside, we do not think cloud data from atmospheric reanalyses could be used to improve MODIS cloud masking.

 

8.       How does the measurement accuracy of the MODIS brightness temperatures affect your lead detection by thermal contrast?

 

“The MODIS imager is sufficiently accurate for lead detection….”

 

What is sufficient?

We added the sentence “The MODIS noise-equivalent temperature difference is 0.05 K at the 11 μm channel, which is sufficiently accurate for lead detection.”

 

9.       “A pixel is identified as a potential sea ice lead if it has an 11 μm brightness temperature that is both 1.5 K greater than the mean, and greater than the standard deviation of the brightness temperature of its 25 by 25 pixel surrounding area.”

 

How does the measurement accuracy of the 11 μm brightness temperature compare to this 1.5 K threshold? Please discuss in the paper.

See response to previous comment.
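To make the quoted criterion concrete, a minimal numpy sketch of the local-contrast test is given below. The 1.5 K threshold and 25 by 25 pixel window come from the quoted sentence; treating the test as "the anomaly above the local mean must exceed both the threshold and the local standard deviation" is one reading of it, and the function name is hypothetical, not the authors' code.

```python
import numpy as np

def potential_leads(bt, window=25, contrast_k=1.5):
    """Flag potential sea ice lead pixels in an 11 um brightness temperature
    image: a pixel qualifies when its BT exceeds the mean of its
    window-by-window surrounding area by more than contrast_k kelvin AND by
    more than the local standard deviation (interior pixels only)."""
    half = window // 2
    mask = np.zeros(bt.shape, dtype=bool)
    for i in range(half, bt.shape[0] - half):
        for j in range(half, bt.shape[1] - half):
            patch = bt[i - half:i + half + 1, j - half:j + half + 1]
            anomaly = bt[i, j] - patch.mean()
            mask[i, j] = anomaly > contrast_k and anomaly > patch.std()
    return mask
```

A warm lead-like pixel embedded in a cold, uniform ice pack is flagged; uniform ice is not.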

 

10.   So thermal contrast lead detection is conducted with the original MOD02 data, which has the bowtie effect? Please state this detail clearly.

 

Sentence added: “Because the scan angle is limited to 30°, the remapping does not encounter any bowtie artifacts.”

The explanation in the article is “In addition, we constrain the MODIS scan angle to 30° within nadir, due to the degradation of spatial resolution at larger sensor viewing angles.” We do not feel we need to explicitly name the bowtie effect, as it falls under the umbrella of degradation at larger sensor angles.

 

11.   If I understood correctly, you first conduct map projection of the MODIS data and then lead detection. Thus, the bowtie effect is not present?

Correct; thermal contrast detection, or potential lead identification, is done in the native satellite projection, and only for the subset of the data where the scan angle is less than 30 degrees. This portion of the scan has been chosen because the bowtie effect is not present.

 

12.   Why was the following removed from the Conclusions?

 

 

“Leads can contribute to cloud formation, and it may be that cloud coverage is increasing because leads are increasing.”

 

It would have been nice for you to include a track-changes version of the paper in order to better see how the paper was edited.

The sentence was removed because this will be a topic more rigorously addressed in future work; it would have been a claim made without sufficient supporting evidence.

In response to another reviewer’s comment, the sentence was removed from an earlier draft. Track changes are being used to show changes since the previously reviewed draft. The decision was made not to preserve the changes from all drafts: there have been so many changes that it would be hard to identify which are new to the current draft versus which were made to earlier drafts.

 

Detailed comments

13.   Figure 2: show which color each step corresponds to.

The green boxes correspond to Step 1 (section 2.1.1), blue to Step 2 (section 2.1.2), and grey to Step 3 (section 2.1.3); description added to caption.

 

14.   On pages 8 and 9, the same figure appears two times; some kind of paper layout error. The same figure also appears on page 24.

This is an editorial error; it appears in the PDF but not the Word version of the document. We will confirm it is correct in the galley proofs.

 

15.   Add distance scales to Figures 5 and 10.

Text added to the caption referring to the 200 x 200 km box.

 

16.   In Section 3.3, for Figures 13 and 14, give the size of the pole hole.

Text added to Section 2 that describes the coverage gap at the pole (no coverage north of 81°N).

 

17.   Page 19, line 383: “A comparison of the time-series of our results and the Willmes and Heinemann [37] analysis is resented in”: missing words at the end, “in Table 2” I guess at least.

Yes, “in Table 2” appears in the author’s version of the document. This was an error introduced in the publisher’s pdf version.

Round 2

Reviewer 3 Report

"Our goal is to produce a product that could be generated in real-time; therefore we have not considered the use of reanalysis products."

You should emphasize this 'real time' product goal in your paper. It seems that it is not mentioned now.

Figure 6 is again in the wrong place.

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

Review of manuscript

The Detection and Characterization of Arctic Sea Ice Leads with Satellite Imagers

by

Jay P. Hoffmann et al.

 

Summary

The paper describes an approach to detect Arctic sea-ice leads in thermal-infrared satellite imagery. Surface brightness temperatures from MODIS band 31 (11 μm) are used to retrieve potential leads based on the thermal contrast between cold sea ice and warmer thin ice / open water areas. The primary retrieval is followed by a cascade of post-processing techniques that aim at 1) removing artefacts in the preliminary segmentation results and 2) deducing spatial characteristics, e.g. width and orientation, of what the authors refer to as lead branches, i.e. spatial sub-units of complex lead structures. The results are used to present the inter-annual variability of different lead characteristics for sub-regions in the Arctic.

 

General comment

While the technical approach suggested here is surely valuable and has the potential to advance the available methods for lead detection in the Arctic, the presentation, documentation and discussion of the results are not yet convincing and require improvements before publication. My main concern with the presented manuscript is that the claimed benefits and advantages of the lead characterization part of the sophisticated processing chain are not documented adequately and the performance of the method is not really addressed in the sense of validation. I suggest the paper be worked over with emphasis put on the validation and the documentation of the technical approach.

I will try to itemize my main criticisms and provide suggestions to improve the manuscript with the following annotations:

 

Specific annotations and questions

Is there a special reason for using a brightness temperature anomaly threshold of 1.5 K to segment leads from sea ice? Is this threshold critical to the final results?

Figure 3: I wonder why this field is not shown with the same projection as Figure 4. Maybe the two figures can even be combined into one with 2 subplots (same for Figures 5 and 6).

P7, L146 ff: The satellite path convergence at higher latitudes might cause a lead detection bias if fewer overpasses will lower the lead detection probability. Introducing the counts of clear-sky observations as an additional metric does not seem to be an extensive adjustment of the algorithm. So the reader might wonder why this will only be introduced in a “future version” of the method.

P7, L170: How can a Sobel filter help connecting discontinuous “sub-resolution features”. Isn’t it just detecting edges?

P8, L174: “…related groups of discontinuous objects are combined”. This needs additional explanation. What makes groups being related to one another? If binary opening/closure is applied here it should be specifically mentioned and maybe documented in a plot.

P8, L182: “… in two or more overpasses within a day.” This makes a lead detection more probable at higher latitudes (see above)

P9, P10: The technical descriptions in this section should definitely be supported by an illustration, i.e. a figure where the reader can comprehend what impact this has on the first potential lead map (Fig. 5)

P9, L201: Is the Hough transform applied to an entire MODIS tile? If so, what happens if multiple lines are detected, which I suspect will be quite often the case?

P9, L125ff: Why are lead edges removed? What is the reason for disconnecting objects here that were previously connected by the algorithm?

Figure 7: The legend is way too large and several instances mentioned here are not covered by the examples above. Is the confidence of this approach validated by any means, e.g. a manual reclassification or data from visible channels? I think it should be, at least for a case study. The presented figure (Fig. 8) does not provide sufficient insight into the general performance of this approach.

As a reader of the manuscript I would have liked to see one or two daily pan-Arctic composites of the detected leads after the corrections were applied, especially because that is what is being supplied as a data product.

Supplement S1: I have the impression that this animation does not show “Daily Lead Detections”, because leads appear rather stationary and persistent. It appears to me as if a moving aggregate or moving average were applied, which introduces a smoothing to the data. Please check!!

Figure 9: The scale of this figure ranges from 0 to 10+. So if January to April are considered (120 days), a case of 10 leads being detected means that the lead frequency is only about 8%, even for the hot spots. This appears quite low to me and requires at least being addressed in the discussion.

Figure 11, 12: The orientation and width of leads should be exemplarily shown for one daily field so that the reader can better understand from what kind of daily fields this analysis originates.


Author Response

Reply to Review Report #1

1.       Is there a special reason for using a brightness temperature anomaly threshold of 1.5 K to segment leads from sea ice? Is this threshold critical to the final results?

This mimics the Willmes approach.


2.       Figure 3: I wonder why this field is not shown with the same projection as Figure 4 . Maybe the two figures can even be combined to one with 2 subplots (same for figures 5 and 6).

Figure adjusted following the suggestion of Reviewer 2; only a small region of interest is shown.


3.       P7, L146 ff: The satellite path convergence at higher latitudes might cause a lead detection bias if fewer overpasses will lower the lead detection probability. Introducing the counts of clear-sky observations as an additional metric does not seem to be an extensive adjustment of the algorithm. So the reader might wonder why this will only be introduced in a “future version” of the method.

Text added for clarification: a user can infer higher confidence when the lead has been detected multiple times. A new paragraph has been added in the discussion section to expand on the concept.


4.       P7, L170: How can a Sobel filter help connecting discontinuous “sub-resolution features”. Isn’t it just detecting edges?

The filter does more than edge detection; a figure has been added to show the general concept, along with a figure showing an example case where the filter is applied in the algorithm.


5.       P8, L174: “…related groups of discontinuous objects are combined”. This needs additional explanation. What makes groups being related to one another? If binary opening/closure is applied here it should be specifically mentioned and maybe documented in a plot.

New figure added (see previous).


6.       P8, L182: “… in two or more overpasses within a day.” This makes a lead detection more probable at higher latitudes (see above)

True. This is one reason we limit our domain to north of 66.5°N.


7.       P9, P10: The technical descriptions in this section should definitely be supported by an illustration, i.e. a figure where the reader can comprehend what impact this has on the first potential lead map (Fig. 5)

Figure added that illustrates the algorithm steps.


8.       P9, L201: Is the Hough transform applied to an entire MODIS tile? If so, what happens if multiple lines are detected, which I suspect will be quite often the case?

Clarification added to the text; an example is shown in a new figure. The test is applied to the mask of the number of times a potential lead has been detected (not the native MODIS imagery). The longest line is found, the object associated with that line is subjected to more testing, that object is removed from the mask, and the process is repeated until every object (that is large enough) has been processed.
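For readers unfamiliar with the step being described, the core of a Hough transform, accumulating line votes over a binary detection mask and picking the strongest line, can be sketched as follows. This is a generic illustration with assumed names, not the authors' implementation, which additionally removes each identified object and repeats the search.

```python
import numpy as np

def strongest_line(mask, n_theta=180):
    """Hough transform of a binary mask: every set pixel votes for all lines
    (theta, rho) passing through it; the bin with the most votes identifies
    the dominant linear feature. Returns (angle_deg, rho_pixels)."""
    ys, xs = np.nonzero(mask)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*mask.shape)))  # largest possible |rho|
    votes = np.zeros((n_theta, 2 * diag + 1), dtype=int)
    for t_idx, theta in enumerate(thetas):
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int)
        np.add.at(votes[t_idx], rhos + diag, 1)  # accumulate unbuffered votes
    t_best, r_best = np.unravel_index(np.argmax(votes), votes.shape)
    return float(np.degrees(thetas[t_best])), int(r_best) - diag
```

In the procedure described above, the object overlapping the strongest line would then be subjected to further shape tests, removed from the mask, and the search repeated on the remainder.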

 

9.       P9, L125ff: Why are lead edges removed? What is the reason for disconnecting objects here that were previously connected by the algorithm?

Figure added for clarity of identification of lead branches.


10.   Figure 7: The legend is way too large and several instances mentioned here are not covered by the examples above. Is the confidence of this approach validated by any means, e.g. a manual reclassification or data from visible channels? I think it should be, at least for a case study. The presented figure (Fig. 8) does not provide sufficient insight into the general performance of this approach.

The legend for the synthetic figure is now presented in Table 1. Comparisons with results from other lead products have been added.


11.   As a reader of the manuscript I would have like to see one or two daily pan-Arctic composites of the detected leads after the corrections were applied, especially because that is what is being supplied as data a product.

A comparison figure has been added showing our product and how it compares against the Rohrs AMSR-E product and the Willmes MODIS product. One figure shows the entire region; another shows an enlargement of one small region.


12.   Supplement S1: I have the impression that this animation does not show “Daily Lead Detections”, because leads appear rather stationary and persistent. It appears to me as if a moving aggregate or moving average were applied, which introduces a smoothing to the data. Please check!!

A legend has been added to the animation for clarity. Leads appear red on the day they are detected; the location of that lead changes to white and then fades away as the detection ages. The intent is not to imply persistent detections (a persistent detection would continue to appear as red). The white and shades of gray are used only to illustrate the recent history of leads. Without this, the frame rate of the animation would cause the leads to appear only as flashes of red, and it would be hard to remember where any of these flashes occurred from day to day.


13.   Figure 9: The scale of this figure ranges from 0 to 10+. So if January to April are considered (120 days), a case of 10 leads being detected means that the lead frequency is only about 8%, even for the hot spots. This appears quite low to me and requires at least being adressed in the discussion.

Yes, our detection frequency tends to be lower due to the more rigorous shape testing.  A table for comparison against another leads product has been added.  This is also discussed in the new discussion section.


14.   Figure 11, 12: The orientation and width of leads should be exemplarily shown for one daily field so that the reader can better understand from what kind of daily fields this analysis originates.

The information is contained in text files available on the ftp site; readme files are provided online to help use the files.


Reviewer 2 Report

Please see the attached file.

Comments for author File: Comments.pdf

Author Response

Reply to Review Report #2

Major comments

1.       It’s not clear how well this lead detection method is performing in comparison to previously published methods. A direct comparison should be made, ideally with verification data from an independent source.

Comparison figures with the Rohrs AMSR-E and Willmes MODIS products have been added. A table compares a time series of results.


2.       The results are barely discussed. Were the authors expecting to see the results they have in Figs 10-12? What do they mean? How would these results be different if a different method was used? It is fine to discuss this in future work, but then it’s not clear why these results are presented here when the time/space could be utilized to explain the method better.

More emphasis has been placed on explaining the detection method and how the results compare against other methods.


3.       Many of the quantities in Fig 12 are not well defined in the manuscript (for example, branch area, branch width vs bulk area and bulk width).

Bulk and branch quantities are described in Steps 2 and 3 – Sections 2.1.2 and 2.1.3.


4.       The method is overall vague on details in terms of how it was evaluated. For example, it is stated (line 100) that another cloud mask was used to remove false cloud detection over leads, and a threshold of 50% for that mask was used, but it’s not stated why that mask was chosen, or how many images were used to come to 50%.

Fraser et al. provide the details of the cloud mask filter. Figure 3 illustrates why the threshold was chosen.


5.       The figures in the paper look very unprofessional. For example, Figure 3 is of what region (lat/lon)? It looks stretched. It is too big and the reader does not know what part of the image to focus on.

Made Fig 3 smaller, focused on a single group of errant cloud detections. Lat/lon lines are omitted because many of the features run parallel to lines of constant latitude or longitude; we do not want the navigation information to interfere. Boxes are used to highlight regions of interest, and the locations of those boxes are shown on images that show the entire region.


6.       The lat/lon should be shown for all figures, legends should be removed or tidied up (eg. Fig 5-Fig 9). Why are the titles in Fig 11 different colours?

Adding lat/lon to the figures would make them too busy; more text has been added to inform the reader of the area of interest. Legends have been adjusted. As indicated in the caption, the colors in Fig 12 (previously Fig 11) correspond to the color-coded regions in Fig 11 (previously Fig 10).


7.       I don’t understand why a lead will have a brightness temperature less than 271K - wouldn’t that mean the lead is frozen water?

Lead pixels are colder than the nominal freezing point because a portion of the pixel contains frozen water. Our method is meant to detect leads surrounded by sea ice; we try not to detect the interface between sea ice and open water.


8.       How was an area of 25 by 25 pixels chosen for the lead detection? How was the threshold of 1.5K chosen?

This mimics the Willmes approach.


9.       It would be nice to have a map of the Arctic showing the average number of overpasses/day for a given pixel. I also don’t see why the authors have put off a confidence rating for a future study. In fact I would argue that right now the results are less meaningful because the number of overpasses/day are not taken into account when the results are presented, and that a confidence rating (if straightforward) should be presented in the manuscript where the method is explained.

The map of clear overpasses is more informative than a map of total overpasses.  Discussion of detection confidence has been added to the discussion section.


10.   The shapes in Fig 7 that fail to be identified as leads are obviously not leads. Can the authors provide other shapes that look more like one might encounter in sea ice cover and use those in their testing? As it is Fig 7 is not very insightful.

The idea here was to use this in combination with real data; we do provide sample results with real data, too.


11.   The way the algorithm is written it is very difficult to understand. For example:

– The steps in lines 162-164 should be explained more clearly.

A figure that illustrates the steps of the algorithm has been added, along with some new text to clarify the descriptions.

– What is the Sobel filter applied to? Is it the image that counts the number of potential leads/day, or some kind of binary mask derived from that image?

A new figure shows an example of how the Sobel filter is applied.

– lines 200-208 - how is a subregion defined? what ‘final tests’ (line 206) are carried out?

Text added and a figure illustrates an example.

– lines ‘217-218’ - what do you mean restored to their original size?

A diagram has been added and text to clarify.

 

Minor comments

12.   line 34 - remove ‘in response’

Done


13.   line 59 - ‘were also’ should be ‘have also been’

Done


14.   line 60 - would be good to state spatial resolution used for the passive microwave lead-detection product

The following has been added to the text: “...the 18.7 and 89 GHz brightness temperatures were mapped to a 6.25 km grid and an emissivity ratio method was used to detect thin ice. A spatial high-pass filter was employed to retain linear thin-ice areas. It was determined that subpixel-resolution leads could be identified.”
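As a rough illustration of the "spatial high-pass filter ... to retain linear thin-ice areas" mentioned in the added text: subtracting a local boxcar mean removes broad-scale variation so that narrow, lead-like anomalies stand out. The window size and function name here are assumptions for illustration only, not values from the cited passive microwave method.

```python
import numpy as np

def spatial_highpass(field, window=11):
    """Subtract the local window-by-window boxcar mean from each interior
    pixel, suppressing broad-scale structure while retaining narrow
    anomalies such as linear thin-ice (lead-like) features."""
    half = window // 2
    out = np.zeros_like(field)
    for i in range(half, field.shape[0] - half):
        for j in range(half, field.shape[1] - half):
            patch = field[i - half:i + half + 1, j - half:j + half + 1]
            out[i, j] = field[i, j] - patch.mean()
    return out
```

A one-pixel-wide linear anomaly survives the filter with most of its amplitude, while a uniform background maps to zero.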


15.   line 64 - it is not because open water is used for calibration, that satellite altimetry can be used

Agreed, and that part of the sentence has been deleted. A brief description of the methodology used by Zakharova et al. has been added.


16.   line 138 - mapped to standard ease grid - what resolution of ease grid was used?

First defined on line 95; reiterated as 1 km on line 138.


17.   line 201 - longest remaining linear - please correct

Added word “feature” to complete sentence.

 


Reviewer 3 Report

see attached file

Comments for author File: Comments.pdf

Author Response

Reply to Review Report #3

General comments

1.       The paper should have Dataset Section where used datasets and their processing are described in detail, e.g. MOD02, MOD35 – what overall cloudmask was used: “confident clear” or “probably clear”?, source for landmask, calculation of TB at band 31, details of EASE2-Grid – was MOD03 product needed to process MOD02 data to EASE2-Grid? Give out also data sources.

Methods section changed to “Data and Methods”. Dataset details added to the text.


2.       I assume that both daytime and nighttime data were used, and based on the paper cloudmask for both datasets are further processed (5x5 convolution kernel), but in your Lead ATBD doc: “When the solar zenith angle is less than 85 degrees, the unmodified cloud mask is used, elsewhere the mask is modified using a spatial filter to remove clouds for night-time overpasses (Fraser and Massom 2009).” Which one is the correct case?

Clarification added to the text: the cloud filter is applied only at night (SZA > 85°); the cloud mask is not modified during the day (SZA < 85°).

 

3.       The discussion on previous studies on lead detection is not very comprehensive, e.g. SAR based lead detection:

a.       Dmitrii MURASHKIN et al., Method for detection of leads from Sentinel-1 SAR images, Annals of Glaciology (2018), Page 1 of 13.

b.       This paper has also nice discussion on previous studies.

i. David Bröhan and Lars Kaleschke, A Nine-Year Climatology of Arctic Sea Ice Lead Orientation and Frequency from AMSR-E, Remote Sens. 2014, 6, 1451-1475; doi:10.3390/rs6021451

c.       So your study is not the first one about lead orientation, as claimed at the end of the Introduction.

d.       There is also a study about CryoSat-2 RA lead detection for lead area fraction and width distribution estimation, not just about lead detection for sea ice thickness retrieval.

i. A. Wernecke and L. Kaleschke, Lead detection in Arctic sea ice from CryoSat-2: quality assessment, lead area fraction and width distribution, The Cryosphere, 9, 1955–1968, 2015.

Thank you for these valuable suggestions. The three references that you provided plus three more have been added and briefly described in the introduction.


4.       The lead detection starts from thermal contrast between leads and sea ice, and the authors are not discussing under which environmental conditions lead detection is possible. I would assume that when thick ice surface temperature approaches freezing temperature then lead detection is not possible. The authors have calculated lead data for months Jan-Apr, why not also for winter months of Nov and Dec? Can in Apr the conditions be too warm for lead detection in marginal ice zone, e.g. Barents Sea? I think the authors must include discussion on this issue.

Discussion added. Jan-Apr was chosen to be consistent with other studies; cloud coverage becomes more problematic outside this period, and the detection method is not designed to work in warmer conditions.


5.       The authors validate their lead algorithm only by synthetic test data. They should also include at least few case studies where detected leads in the MODIS data are compared to a SAR or fine resolution optical imagery (I assume this is possible in Apr).

SAR is addressed in the introduction. Comparisons against the Rohrs et al. AMSR-E product and the Willmes et al. MODIS product have been added.

 

6.       The paper is lacking Discussion Section, from RS template:

a.       “Authors should discuss the results and how they can be interpreted in perspective of previous studies and of the working hypotheses. The findings and their implications should be discussed in the broadest context possible. Future research directions may also be highlighted.”

b.       You could discuss how your lead algorithm and results compare to previous studies. There is a previous lead fraction dataset: https://doi.pangaea.de/10.1594/PANGAEA.854411

Discussion Section added. A comparison with the Rohrs et al. AMSR-E product and the Willmes et al. MODIS product is presented.


7.       What are possible flaws in the current algorithm, e.g. can there still be clouds detected as leads? Discuss possible future improvements, and how will you analyze the multiyear MODIS lead data? Can you estimate the typical maximum ice thickness of detected leads? Do you think cloud data from atmospheric reanalyses could be used to improve MODIS data cloud masking? See

a.       Paul, S.; Willmes, S.; Heinemann, G. Long-term coastal-polynya dynamics in the southern Weddell Sea from MODIS thermal-infrared imagery. Cryosphere 2015, 9, 2027–2041.

Discussion section added. Reference added.


8.       Can your method applied to optical imagery in spring/summertime? Or are melt ponds a problem? Maybe before and after melt ponding?

We do not expect the algorithm to perform well in warmer conditions; a paragraph has been added to the discussion section.


9.       The paper would benefit from a more detailed analysis of the retrieved lead data, but this is left for further studies. I guess this is acceptable.

Yes, we plan to follow up with more detailed analysis in a future article.

 

Detailed comments

1. Introduction

10.   page 2, l. 54: “ice concentration for each tile scene”

a.       What is this tile, a single swath image?

The use of "tile" wasn't important and didn't add anything in this context, so the term was deleted.


11.   p. 2, l. 62: “Synthetic Aperture Radar (SAR) provides the best spatial resolution of microwave sensors but is limited in coverage, both spatial and temporal [19].”

b.       I don’t think this is the case anymore, we nowadays have two Sentinel-1 SARs, RADARSAT-2, PALSAR2, etc. Reference here is over 10 years old.

Newer references have been added to the introduction.


2. Methods

12.   Figure 1 caption text is in bold, as in all other figures. Bold font is not used in RS papers.

Bold font has been changed


13.   Figure 1: give full grid information

Reference added in caption.

14.   2.1.1. Step 1: Aggregate imager data, identify potential leads.

a.       You could discuss why leads are sometimes erroneously flagged as clouds. Is the following the reason? The MOD29 IST data has also an artefact from the MODIS cloud masking algorithm, which uses the Near-real-time Ice and Snow Extent (NISE) product from NSIDC to initialize the sea ice background flag that is used to direct algorithm processing flow. The NISE resolution is only 25 km.

b.       ref: Riggs, G.; Hall, D. MODIS Sea Ice Products User Guide to Collection 6; NSIDC: Boulder, CO, USA, 2015.

Reference and discussion added. It is hard to derive ice temperature when the cloud mask detects clouds; we are able to screen out likely erroneous (night-time) cloud flags and use the 11 micron BT, because ice surface temperature retrievals are not available at these cloud-flagged locations.


15.   How the measurement accuracy of the MODIS brightness temperatures effect your lead detection by thermal contrast?

The MODIS imager is sufficiently accurate for lead detection. The 11 micron brightness temperature adequately approximates the surface temperature in cloud-free regions. The thermal contrast between water and ice is more important than the actual temperature retrieval.


16.   p. 6, l. 120: “Willmes and Heinemann [15]”

a.       Should be [16].

Reference number updated


17.   p. 6, l. 124: “In the example, potential leads are in black, clouds are white, and the water that does not meet the thermal criteria of a lead is gray.”

a.       Should this be sea ice?

We don’t differentiate; “water” could mean sea water or sea ice (we exclude freshwater from the mask).

 

18.   p. 6, l. 137: “During this composition step, the masks from each satellite granule are mapped into a standard EASE2-Grid.”

a.       So thermal contrast lead detection is conducted with original MOD02 data which has bowtie effect? Please give this detail clearly.

Sentence added: “Because the scan angle is limited to 30° the remapping does not encounter any bowtie artifacts.”

 

19.   p. 7 and Figure 5:

a.       I assume that sea ice drift may also produce low potential lead count, and cause lead smearing. Is this correct, and if so, can you remedy it?

Discussion added about fast-moving leads. We believe it is acceptable to omit fast-moving leads because the alternative would be to count each observation at a new location as if the lead were detected at that spot for the whole day, which would result in an overrepresentation of daily lead area.

 

2.1.2. Step 2: Bulk Lead detection.

20.   Give all details of the Sobel image filter to conduct the filtering as you did, if there are such. Or put details to ATBD and refer it in the paper.

Text and figures have been added to describe the application of the image filter. Sobel reference is given.
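For readers unfamiliar with it, the standard Sobel operator referenced here can be sketched in a few lines. This is the generic gradient-magnitude filter only, not the authors' specific application or thresholds, and the function name is our own.

```python
import numpy as np

def sobel_magnitude(img):
    """Sobel gradient magnitude of a 2-D image (generic sketch).
    Correlates the image with the horizontal and vertical Sobel
    kernels and returns sqrt(gx^2 + gy^2)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)
    ky = kx.T
    img = np.asarray(img, dtype=float)
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            win = pad[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)
```

A uniform image yields zero everywhere, while pixels along an edge (such as a lead boundary) yield a large magnitude.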

 

2.2. Output Format

21.   The netcdf file has quite limited supporting information, e.g. EASE2 grid info is missing.

A lat/lon file has been posted on the ftp site for the product.


3. Results

22.   Add some introduction about the content of the following sub-Sections.

Text and figure have been added to describe the technique.


3.2. Case Study

23.   “Users can choose to include lead rejection categories – available in the end product - if there is an application that is less sensitive to commission error and more sensitive to omission error.”

a.       You could summarize these lead rejection categories.

A table has been added to summarize the rejection categories. This sentence has been removed from Section 3.2 and a paragraph has been added in the discussion section.


3.3. Lead Timeseries

24.   Figure 9: Very hard to see any details as sub-figures are so small. Split into e.g. two figures which cover two pages?

The figure has been split across two pages.


25.   Figure 11: color coding of different years is not very clear. The names of regions could be in black color.

The intent of using similar colors for the years is to reinforce that there is little variability; I could use 16 more distinct colors, but they would still overlap and be hard to distinguish – I believe using all shades of blue is more appealing.  The intent of coloring the names of the regions is to match them with the color coded regional map.


26.   Figure 12 and p. 13, l. 283: “Again, there is little inter-annual variability.” You could discuss why there is little inter-annual variability.

The intent was to introduce the capabilities of the algorithm and reserve trend analysis for future work.


4. Conclusions

27.   l. 317: “The observed trend in lead detection is either flat or decreasing; this may be because cloud coverage is making leads increasingly difficult to detect.”

b.       This is very interesting and important, and should be discussed in the Results and/or Discussion.

This sentence moved to the lead time series section and discussion added.


Supplementary Materials:

28.   The following are available online at www.mdpi.com/xxx/s1, Video S1: 2003-2018 Daily Leads Detections

c.       This link was not working. There is Youtube video, but it should have info on color coding.

The updated version has a legend. The editor has not yet supplied a URL for the uploaded video file.


Reviewer 4 Report

Dear Remote Sensing editor and authors of the manuscript
remotesensing-413358.R1,

Thank you to the authors for the revised version of the manuscript. The manuscript has been significantly improved from the previous version. There are still some things needing clarification and correction.

In general, there are many tests and phases included in the proposed algorithm. The selection of the parameters seems quite heuristic and has not been described very well. There are e.g. many threshold values. The selection of the parameters should be described in more detail: on which data set they are based (an artificial data set or real MODIS data, and which part of it), and what the effect of varying each parameter would be. At least it should be said that e.g. "parameter A was selected experimentally based on data set B".

In the revised manuscript the results have been compared to two other methods. This is a good addition. It would still be good to have some kind of comparison to interpretations of some high-resolution optical/IR data (over suitable small areas where data is possibly available). It may be difficult to find suitable reference data because of the Arctic darkness during the winter, but if any possible data could be found, some information on how well the method is capable of detecting narrow leads could be obtained, and something could be said e.g. on the lower limit of lead width detectable by the method.

The method concentrates on the winter time. It would also be nice to get some kind of impression of how the method works in wet surface conditions (melting period).

Some more detailed comments:

P3, Fig. 1 caption: "north of 65" -> "north of
latitude 65N"

P4-5, Table 1 colors coding: Not all the colors can well
be distinguished from each other (in a printed version),
e.g. the dark blue colors and black. Consider replacing
the colors with better distinguishable ones.

P6, Fig. 3: Include the corresponding gray-scale image
as a separate panel in Fig. 3 or indicate it by a polygon
in Fig. 1.

P6 L131-132: What is the reason for comparing the
brightness temperature to its standard deviation?

P7 L134: "...of the near-infrared (NIR) brightness
and..." -> "...of the near-infrared (NIR) brightness
temperature and..."

P7 L137: "...nominal freezing point of salt water."
Salt water of which salinity? Give a reference.

P10 L187: a threshold of two pixels has been used; how has this been selected?

P10 L181: "...lead mask (panel b)" -> "...lead mask
(Fig. 7, panel b)"

P L185: "The region is considered open water..." ->
"The region is considered to be open water..." or
possibly "as".

P12, Sobel filter: Could binary morphological dilation
with a suitable dilation mask be applied instead?

P12: Many filter (thresholds) parameters are presented
here, but their selection is somewhat unclear. Please,
include the information on how the parameters were
selected. It would probably be convenient to include
a table of all the algorithm parameters, their suitable
values and on how they were selected.

P13 L 238-239: Hough Transform (HT) is used. Please include the assumption of why HT has been applied. I think it can be something like: "The leads can be assumed to be polylines, and for this reason HT, which locates linear features in imagery, has been applied."

P13: There are also some parameters (thresholds) and
information on their selection method would be preferable.

P13 L257: "A morphological erosion..." what is the mask
of the morphological erosion, is it a 3x3 rectangular
block or a circular block or something else?

P14 L274-275: Explain "region code" better, now it is
difficult to understand what it really is.

P14 L275: "...that are the furthest distance apart" ->
"...that are the furthest distance apart from each other"

P14 L274: segment width: why is segment width estimated by dividing the area by a circle length? Do you mean circle diameter rather?

P15 L 295: "This article is an introduction to the
algorithm..." -> "This article includes a technical
description of the algorithm..."

P16 Fig. 10: Some colors are difficult to distinguish in
a printed version at least.

P16 Fig. 10 caption: "...that fail one of the tests." Do you mean "...that fail one or more of the tests." or "...that fail at least one of the tests."?

P18 Fig. 12: The panels are marked with capital letters
unlike in other figures. Use a-d instead of A-D here.

P19 Fig 13: Possibly Fig. 13d could have a legend
explaining all the 8 colors in it.

P21 L 369-370: "...Heinemann [37] analysis is presented in" In where? Is something missing here?

P24 Figs. 17-18: The printed figures are small and
it is difficult to see details. Could these be larger
or left to the follow-up analysis manuscript where
they could be larger?

It would also be interesting to see some results for
the melt period data and to evaluate how good or
poor the performance would be. At least some kind
of an estimate. You probably have tried the algorithm
also outside Jan-April?

P25 L405-406: "The design philosophy of our algorithm is to minimize the errors of commission; i.e., to minimize overestimation of leads."
Why not minimize the total error i.e. sum of
false lead detections + missed leads? Would this
change something essentially?

P27 L504: "We did not attempt to define to minimum.."
-> "We did not attempt to define the minimum.."
This would be interesting to verify with some higher
resolution data.

Could sea ice drift better be taken into account? Small drift (or image geolocation inaccuracies) could probably be compensated by applying a morphological dilation to the lead candidates in each image. Larger ice drift could be estimated e.g. by the maximum normalized cross-correlation method between two overlapping images. These could be mentioned as ideas for further development.

Sincerely,

Author Response

1.       In general, there are many tests and phases included in the proposed algorithm. The selection of the parameters seems quite heuristic and has not very well been described.

We have reviewed the algorithm description sections and added more description.


2.       There are e.g. many threshold values. The selection of the parameters should be described in more detail: on which data set they are based (an artificial data set or on real MODIS (which part of) data), what would be the effect of varying each parameter. At least it should be said that e.g. "parameter A was selected experimentally based on data set B".

Text has been added to state that thresholds were derived through a combination of manual interpretation of the MODIS imagery from case studies and by using the synthetic test case.


3.       In the revised manuscript the results have been compared to two other methods. This is a good addition. It would still be good to have some kind of comparison to interpretations of some high-resolution optical/IR data (over suitable small areas with data possibly available). It may be difficult to find suitable reference data because of the Arctic darkness during the winter, but if any possible data could be found some information on how well the method is capable of detecting narrow leads could be obtained and something could be said e.g. on what is the lower limit of lead width detectable by the method.

The effect of pixel size on lead statistics has been examined previously. The results of those studies have been summarized with quantitative examples in new text in the 2nd paragraph of the discussion section.


4.       The method concentrates on the winter time. It would also be nice to get some kind of impression on how the method works in wet surface conditions (melting period).

See 6th paragraph of discussion section. We acknowledge that one of the weaknesses of the algorithm is that it was not designed to work in the summer months. The thermal contrast between leads and the surrounding ice would be very different and the surface can be more complex. Melt ponds, for example, may appear in the summer and these could potentially fool the algorithm. When the ice surface temperature approaches the water temperature, the algorithm would not detect thermal contrast and therefore not detect leads. Persistent cloud coverage in the warmer months is another factor for the limited period of coverage.


Some more detailed comments:

5.       P3, Fig. 1 caption: "north of 65" -> "north of latitude 65N"

Caption has been corrected.


6.       P4-5, Table 1 colors coding: Not all the colors can well be distinguished from each other (in a printed version), e.g. the dark blue colors and black. Consider replacing the colors with better distinguishable ones.

All of the colors in this table are used in other figures in the manuscript. Changing the color code would require updating all other figures, which is not feasible at this time. A product user is free to generate plots with a different color enhancement; Table 1 is still useful in describing the various codes that appear in the output products.


7.       P6, Fig. 3: Include the corresponding gray-scale image as a separate panel in Fig. 3 or indicate it by a polygon in Fig. 1.

Figure 3 is used to demonstrate the function of the spatial filter we applied; the filter is applied in the native satellite projection (not the projection used in Fig 1). The geolocation of Figure 3 is not important for demonstrating the technique. In fact, what appears as a rectangle in Figure 3 (native satellite projection) would appear as a rotated and skewed quadrilateral in Figure 1; we believe this would cause more confusion. For these reasons, we decided not to indicate its location in Fig 1. Adding a second panel to Fig 3 would make the figure more complex and take away from its intended purpose. We do include the notation that the cloud mask is part of the granule shown in Figure 1.


8.       P6 L131-132: What is the reason for comparing the brightness temperature to its standard deviation?

The pixels with anomalously high brightness temperature are leads; we are looking for high thermal contrast, which must meet one of the following two requirements: 1. at least 1.5 K higher than the mean; 2. higher than the mean by one standard deviation for cases that have more spatial heterogeneity (standard deviation higher than 1.5 K).
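Our reading of this two-case test can be sketched as follows. The function name and the exact form of the threshold are assumptions based on the description in this response, not the authors' code.

```python
import numpy as np

def potential_lead_mask(bt, contrast=1.5):
    """Hypothetical sketch of the thermal-contrast test: flag pixels
    warmer than the scene mean by 1.5 K, or, for heterogeneous scenes
    (std > 1.5 K), warmer than the mean by one standard deviation.
    Both cases reduce to a threshold of mean + max(1.5, std)."""
    mean, std = bt.mean(), bt.std()
    threshold = mean + max(contrast, std)
    return bt > threshold
```

In a thermally uniform scene only a strongly anomalous (warm) pixel is flagged; in a heterogeneous scene the bar rises with the standard deviation, which suppresses false positives.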


9.       P7 L134: "...of the near-infrared (NIR) brightness and..." -> "...of the near-infrared (NIR) brightness temperature and..."

It was an error. We deleted “the near-infrared (NIR) brightness and”.


10.   P7 L137: "...nominal freezing point of salt water." Salt water of which salinity? Give a reference.

Changed to “nominal freezing point of sea water with a salinity of 35 ppt.” This is common knowledge and no reference is needed.


11.   P10 L187: a threshold of two pixels has been, how has this been selected?

For the Hough Transform, a line must contain at least 3 points to be identifiable; we reject features with 2 points or fewer because we cannot detect them as linear features.
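The three-point minimum can be seen from how a Hough transform accumulates votes: each pixel votes for every line passing through it, and since any two points are trivially collinear, only a peak of three or more votes indicates a genuine linear feature. A toy accumulator for illustration (names and resolutions are made up, not the algorithm's actual parameters):

```python
import numpy as np

def hough_peak_votes(points, n_theta=180, rho_res=1.0):
    """Toy Hough accumulator over (row, col) points: returns the largest
    number of points voting for a single (rho, theta) line cell."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    votes = {}
    for r, c in points:
        for t in thetas:
            # Normal-form line equation: rho = x*cos(theta) + y*sin(theta)
            rho = round((c * np.cos(t) + r * np.sin(t)) / rho_res)
            key = (rho, round(float(t), 6))
            votes[key] = votes.get(key, 0) + 1
    return max(votes.values())
```

Three collinear pixels produce a peak of 3, while a two-pixel feature can never exceed 2, so it is indistinguishable from chance alignment.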


12.   P10 L181: "...lead mask (panel b)" -> "...lead mask (Fig. 7, panel b)"

Revised.


13.   P L185: "The region is considered open water..." -> "The region is considered to be open water..." or possibly "as".

Revised.


14.   P12, Sobel filter: Could binary morphological dilation with a suitable dilation mask be applied instead?

Yes, that might also work; it is not something we’ve tried.


15.   P12: Many filter (thresholds) parameters are presented here, but their selection is somewhat unclear. Please, include the information on how the parameters were selected. It would probably be convenient to include a table of all the algorithm parameters, their suitable values and on how they were selected.

Text added for clarification. Adding further description or pseudo-code to Table 1 was considered, but we decided that the added information might actually make it harder rather than easier to read.


16.   P13 L 238-239: Hought Transform (HT) is used. Please, include the assumption why HT has been applied. I think it can be something like: "The leads can be assumed to be polylines and for this reason HT locating linear in imagery has been applied."

Yes, this is a good idea, text added for clarification.


17.   P13: There are also some parameters (thresholds) and information on their selection method would be preferable.

See point 15.


18.   P13 L257: "A morphological erosion..." what is the mask of the morphological erosion, is it a 3x3 rectangular block or a circular block or something else?

Yes, a 3x3 square array; this notation has been added to the text.
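A binary erosion with a 3x3 square structuring element can be sketched as follows. This is a generic illustration of the operation named in the response, not the authors' code.

```python
import numpy as np

def binary_erosion_3x3(mask):
    """Morphological erosion with a 3x3 square structuring element:
    a pixel survives only if it and all 8 of its neighbours are set.
    Pixels outside the array are treated as unset."""
    padded = np.pad(mask.astype(bool), 1, mode="constant",
                    constant_values=False)
    h, w = mask.shape
    out = np.ones((h, w), dtype=bool)
    for di in range(3):
        for dj in range(3):
            out &= padded[di:di + h, dj:dj + w]
    return out
```

Erosion peels one pixel off the boundary of every object, which removes single-pixel noise and thin protrusions.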


19.   P14 L274-275: Explain "region code" better, now it is difficult to understand what it really is.

Yes, to clarify “region” is replaced with “sea” when referring to a specific geographic location.


20.   P14 L275: "...that are the furthest distance apart" -> "...that are the furthest distance apart from each other"

Revised.


21.   P14 L274: segment width: why is segment width estimated by dividing the area by a circle length? Do You mean circle diameter rather?

The great circle distance is the distance between two points following the curvature of the earth; the area divided by that length is the width (width x length = area).
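The width estimate described here (area divided by the great-circle length between the segment's two most distant points) can be sketched with the haversine formula. The function names and the helper are ours, for illustration only.

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, r_earth=6371.0):
    """Haversine great-circle distance in km between two points given
    as latitude/longitude in degrees (spherical-earth approximation)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r_earth * math.asin(math.sqrt(a))

def lead_width_km(area_km2, lat1, lon1, lat2, lon2):
    """Hypothetical helper: width = area / length, with length the
    great-circle distance between the two most distant segment points."""
    return area_km2 / great_circle_km(lat1, lon1, lat2, lon2)
```

For example, a 100 km^2 segment whose endpoints are a quarter of the globe apart would have a very small derived width, consistent with a long, narrow feature.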


22.   P15 L 295: "This article is an introduction to the algorithm..." -> "This article includes a technical description of the algorithm..."

Revised as “This article presents a technical description of the algorithm”


23.   P16 Fig. 10: Some colors are difficult to distinguish in a printed version at least.

See comment #6


24.   P16 Fgi. 10 caption: "...that fail one of the tests." Do You mean "...that fail one or more of the tests." or "...that fail at least one of the tests."?

Once one test fails, the processing loop is exited; no further testing is done, so an object can fail at most one test.


25.   P18 Fig. 12: The panels are marked with capital letters unlike in other figures. Use a-d instead of A-D here.

This was done to reinforce that the Fig 12 is a subset of the larger Fig 11; therefore Fig 11 uses capital letters and 12 uses lower case versions of the same letters.


26.   P19 Fig 13: Possibly Fig. 13d could have a legend explaining all the 8 colors in it.

We believe that adding a legend to panel d would mean that we should have a legend for all panels, and this would result in an overly complex figure.  We believe the text description of the color is sufficient, and that the primary colors and the combination of primary colors is a fairly well understood concept.


27.   P21 L 369-370: "...Heinemann [37] analysis is 369 presented in" In where? Is here something missing?

It is “A comparison of the time-series of our results and the Willmes and Heinemann [37] analysis is presented in Table 2.”


28.   P24 Figs. 17-18: The printed figures are small and it is difficult to see details. Could these be larger or left to the follow-up analysis manuscript where they could be larger?

Yes, the idea was to just introduce the capability in this paper and we expect to follow-up this analysis in another manuscript.


29.   It would also be interesting to see some results for the melt period data and to evaluate how good or poor the performance would be. At least some kind of an estimate. You probably have tried the algorithm also outside Jan-April?

The analysis is for January to April only. We have some discussion on this limitation in the manuscript.


30.   P25 L405-406: "The design philosophy of our algorithm is to minimize the errors of commission; i.e., to minimize 405 overestimation of leads." Why not minimize the total error i.e. sum of false lead detections + missed leads? Would this change something essentially?

Total error and in particular error of omission would be hard to quantify given the lack of validation truth.  Even comparing against a high resolution imager or microwave leads product, truth would be difficult to assess given differences in cloud coverage, overpass time, leads changing throughout the day, etc.  We believe errors of commission are relatively easy to identify – if we detect an object that doesn’t look like a lead, it is an error of commission.  But errors of omission are hard because we don’t know if the error was caused by a cloud (cloud-free in the “truth” dataset but cloud-obscured in our dataset), or if it may have been a false positive in the “truth” dataset (for example a cloud edge that was misclassified as a lead in the “truth” dataset).


31.   P27 L504: "We did not attempt to define to minimum.." -> "We did not attempt to define the minimum.." This would be interesting to verify with some higher resolution data.

Revised


32.   Could sea ice drift better be taken into account? Small drift (or image geolocation inaccuracies) could probably be compensated by applying a morphological dilation to the lead candidates in each image. Larger ice drift could be estimated e.g. by maximum normalized cross-correlation method between two overlapping images. These could be mentioned as ideas for further development.

Tracking leads would be a good topic for future research.


Round 2

Reviewer 1 Report

Review of re-submitted manuscript by J. Hoffmann et al.


Summary 


The paper has improved to some degree as it now addresses some of the questions that I raised in my first review. However, I have to conclude that the paper is not yet in a shape that merits publication, which is mainly a consequence of a) illustrations that need improvement and b) confusing explanations regarding the complex methodology and overall conclusions. The paper addresses some interesting approaches and the final data will be very useful to compare with existing products. But further improvements are required.

I suggest the authors carefully work over the readability of their manuscript and their argumentation before the paper can be published. To provide some guidance I will summarize my criticisms in the following.


Comments and questions


Supplement: I am still not convinced by the presentation of the final data in this format. The authors state that "... the location of that lead changes to white and then fades away as the detection ages (Legend: Leads from previous day(s))". This is very unspecific. What does "then fades away" mean? For how many days after its detection will a lead be visible in the animation? This is critical to my understanding and key to interpreting what is being shown. It would be fairer to show the lead maps day by day because the current format implies persistence without providing argumentation for it.


Please check the layout of your manuscript! Figure 6 appears 3 times in my PDF... :(


Figure 1: Please add graticules instead of stating that "the North Pole is in the center of the projection" in the figure caption. If brightness temperature is shown, then a colorbar indicating the values should be added.


Table 1: I think this overview can be significantly reduced without losing important information. In the end it is 28% leads, 55% clouds and the rest does not pass the tests, mainly due to size (8%) and width (6%) of objects. The rest of omissions (3%) could be combined to one class. The "No coverage" class is redundant in my opinion.


Figure 3: Figure caption is not in required text style according to MDPI? The reader remains clueless about where this subset is located and how many pixels are shown. A colorbar could be added to improve readability.


P7, line 134: The approach did not use NIR data to detect leads... only for validation of results.


P8: Remove Figure 6 here.


P8, line 154: "The majority of the features that appear to be leads correspond to a color that indicates that the detection of the thermal feature occurred in multiple overpasses within the same day." I cannot understand what is meant here because the different color coding was not yet introduced at this position of the paper. 


When you talk about "The daily composite mask" I would suggest to rather call this a map or an array, not a mask.


Sections 2.1.2 and 2.1.3 are really hard to follow by reading the text. Please consider re-writing in a more comprehensive way.


P10, line 183 and line 185: "might be open water rather than a lead" This leaves me confused. Leads can be open water! You even state this yourself in the Introduction. So why is open water discarded here?


Figure 7: The pixel length of the presented arrays should be indicated. Please describe more adequately that this is an iterative process, where the largest object is considered in each new iteration. I needed to read the section 3 times before I understood that concept.


Figure 8: I disagree. If the interruptions in your example will be a bit larger, the Sobel filter won't connect the individual lines. The same will hold for lead objects. (Figure 8 is redundant in my mind)


P12, line 232: remove "circle" once. Sentence is hard to read (as many others)


P13, line 258: "is a mask THAT contains..."


P13, line 262: The reader does not know what text files you are talking about, when the product has not been mentioned yet.


Figure 9: Figure caption, first sentence hard to understand. What is the advantage of subsetting the long right branch of the lead into so many different sub-branches? This is definitely critical for the length and orientation statistics provided later.


Figure 10: All the objects that do not pass the test are shapes that one would not expect as potential lead objects.


P17, line 339: "more aggressive"??

... lines 340-341: Please provide an example or Figure reference.


Figure 11: please add the relevant color code as a legend.


Figure 12: (B): Please use colors with better contrast to show the difference between leads and artifacts. (same for Figure 13 (b))


P21, line 370: Something is missing here.


Table 2: Caption: This is a statistical comparison rather than a "time series" (also in text). What is the "left set of columns"?


P25, line 407: "...is in better agreement with the Willmes and Heinemann [37] analysis." As compared to what?


... line 412: "...cannot process a location for leads" Why? The location of leads is deducible straight from the map.


... lines 419-420: "The difference in lead area is largely a result of the linear identification techniques that our algorithm employs". When I read the paper of Willmes and Heinemann I see that they use an eccentricity (solidity) parameter to distinguish between leads and artifacts.


... line 424: "Paul et al [40]..." is not a suitable reference here.

... line 426: "the thermal contrast conditions may not be sufficiently large near the edge of the sea ice or near the shore". I disagree. The shape and width tests in the algorithm are probably the cause for polynyas not being represented in the final product.


P26: Remove Figure 6 (again)






Author Response

Comments and questions

1.       Supplement: I am still not convinced by the presentation of the final data in this format. The authors state that "... the location of that lead changes to white and then fades away as the detection ages (Legend: Leads from previous day(s))". This is very unspecific. What means "then fades away"? For how many days after its detection will a lead be visible in the animation? This is crtitical to my understanding and key to interpreting what is being shown. It would be more fair to show the lead maps day by day because the current format implies persistence without providing argumentation for it.

The animation is meant to be used for qualitative, not quantitative, analysis. For quantitative analysis, the same information is plotted in Fig 14 & 15, or the data files are available for download. In the animation, the historical location of a lead is shown for 10 days: it is red on the day it is detected, white on day 2, the shading becomes increasingly darker gray on days 3-10, and it disappears on day 11. The frame rate of the animation is so fast that leads would not be identifiable if they were only shown on the day of detection; it is also too fast to distinguish shades of gray; only the fading effect is meant to be apparent.


2.       Please check the layout of your manuscript! Figure 6 appears 3 times in my PDF…

Editorial error has been corrected.


3.       Figure 1: Please add graticules instead of stating that "the North Pole is in the center of the projection" in the figure caption. If brightness temperature is shown, then a colorbar indicating the values should be added.

Longitude and north pole notations have been added to the figure. A colorbar is not included because a nonlinear enhancement has been used on the brightness temperature to enhance the thermal contrast of the leads; the actual temperature is not as important as the relative contrast shown in the image.


4.       Table 1: I think this overview can be significantly reduced without losing important information. In the end it is 28% leads, 55% clouds and the rest does not pass the tests, mainly due to size (8%) and width (6%) of objects. The rest of omissions (3%) could be combined to one class. The "No coverage" class is redundant in my opinion.

We believe there is value added by preserving several different rejection categories.


5.       Figure 3: Figure caption is not in required text style according to MDPI? The reader remains clueless about where this subset is located and how many pixels are shown. A colorbar could be added to improve readability.

The font style has been updated for all figure captions. The image is approximately 660 × 480 pixels and is shown in the native satellite projection; it has not been reprojected into the projection the product uses because this is consistent with how the algorithm works: the cloud spatial filter is applied to the data before any remapping is done, with the spatial units being pixels rather than km (pixel size is near 1 km given the constraint of a scan angle of less than 30°). We believe the location of this feature on a map is irrelevant; it is shown only to illustrate the filter concept. If a reader is interested in more detail on this concept, a reference is given to a paper that describes the issue further. Text has been added to the caption to clarify the colors, but we believe a color bar would make the image busier.


6.       P7, line 134: The approach did not use NIR data to detect leads... only for validation of results.

Deleted “near infrared (NIR) brightness and” from P7 line 134.


7.       P8: Remove Figure 6 here.

Editorial error has been corrected.


8.       P8, line 154: "The majority of the features that appear to be leads correspond to a color that indicates that the detection of the thermal feature occurred in multiple overpasses within the same day." I cannot understand what is meant here because the different color coding was not yet introduced at this position of the paper.

Added “(not red in Figure 5)” to  P7 line 153.


9.       When you talk about "The daily composite mask" I would suggest to rather call this a map or an array, not a mask.

That is a good point, mask has been changed to map where appropriate.


10.   Sections 2.1.2 and 2.1.3 are really hard to follow by reading the text. Please consider re-writing in a more comprehensive way.

The sections have been edited for clarity.


11.   P10, line 183 and line 185: "might be open water rather than a lead" This leaves me confused. Leads can be open water! You even state this yourself in the Introdcution. So why is open water discarded here?

Yes, we understand the confusion; the sentence has been clarified: “…object is a non-lead open water feature (e.g. polynya)”.


12.   Figure 7: The pixel length of the presented arrays should be indicated. Please describe more adequately that this is an iterative process, where the largest object is considered in each new iteration. I needed to read the section 3 times before I understood that concept.

Yes, text has been added to describe the box as 200 × 200 km and to further clarify that it is an iterative process.
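As a concrete illustration of the iterative idea (label the remaining pixels, take the largest connected object, process it, remove it, and repeat), here is a minimal sketch; the 8-connectivity, the grid representation, and the function name are assumptions for illustration, not the authors' implementation:

```python
from collections import deque

def extract_objects_largest_first(grid):
    """Iteratively peel connected objects off a binary grid, largest first.

    `grid` is a list of lists of 0/1. Returns a list of objects (sets of
    (row, col) pixels) in the order they would be processed: the largest
    remaining object is taken in each new iteration. 8-connectivity is an
    assumption for this sketch.
    """
    rows, cols = len(grid), len(grid[0])
    remaining = {(r, c) for r in range(rows) for c in range(cols) if grid[r][c]}
    objects = []
    while remaining:
        # label all connected components of what is left
        components, seen = [], set()
        for start in remaining:
            if start in seen:
                continue
            comp, queue = set(), deque([start])
            seen.add(start)
            while queue:                          # breadth-first flood fill
                r, c = queue.popleft()
                comp.add((r, c))
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        nb = (r + dr, c + dc)
                        if nb in remaining and nb not in seen:
                            seen.add(nb)
                            queue.append(nb)
            components.append(comp)
        biggest = max(components, key=len)        # consider the largest object
        objects.append(biggest)                   # ...process it this iteration
        remaining -= biggest                      # ...then remove it and repeat
    return objects
```

In practice, the per-object tests (e.g. shape and width) would be applied to each extracted object before moving on to the next iteration.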


13.   Figure 8: I disagree. If the interruptions in your example will be a bit larger, the Sobel filter won't connect the individual lines. The same will hold for lead objects. (Figure 8 is redundant in my mind)

Figure 8 has been removed; it was redundant given Figure 7 (d) shows an example of the filter being applied to real data.


14.   P12, line 232: remove "circle" once. The sentence is hard to read (as are many others).

Errant “circle” removed.


15.   P13, line 258: "is a mask THAT contains..."

Word “that” added.


16.   P13, line 262: The reader does not know what text files you are talking about, when the product has not been mentioned yet.

The word “output” has been added twice for clarification.


17.   Figure 9: Figure caption, first sentence is hard to understand. What is the advantage of subsetting the long right branch of the lead into so many different sub-branches? This is definitely critical for the length and orientation statistics provided later.

Yes, branch statistics will have a low bias because, as you noticed, there can be numerous branches. This is why bulk characteristics are also included in the output; statistics of the bulk characteristics would not have the same small-bias issue.


18.   Figure 10: All the objects that do not pass the test are shapes that one would not expect as potential lead objects.

Correct, the algorithm successfully rejects shapes that one would not expect to be leads. The figure is shown to illustrate that the technique was tested with synthetic (and real) data.


19.   P17, line 339: "more aggressive"??

The “more aggressive” description is not appropriate and has been removed.


20.   ... lines 340-341: Please provide an example or Figure reference.

This is illustrated in Figures 12 and 13; a reference has been added in the text.


21.   Figure 11: please add the relevant color code as a legend.

As indicated, the legend is available in Table 1; we believe adding a legend to the figure would make it busier (a legend included in an earlier draft was removed in response to reviewer feedback).


22.   Figure 12 (B): Please use colors with better contrast to show the difference between leads and artifacts (same for Figure 13 (b)).

We believe the color contrast between red and black is sufficient.


23.   P21, line 370: Something is missing here.

Sentence edited.


24.   Table 2: Caption: This is rather a statistical comparison than a "time series" (also in text). What is the "left set of columns"?

“Time series” has been replaced with “statistical difference of lead products”.


25.   P25, line 407: "...is in better agreement with the Willmes and Heinemann [37] analysis." As compared to what?

Sentence rewritten for clarity.


26.   ... line 412: "...cannot process a location for leads" Why? The location of leads is deducible directly from the map.

Changed “location” to “statistics” to clarify the point.


27.   ... lines 419-420: "The difference in lead area is largely a result of the linear identification techniques that our algorithm employs". When I read the paper of Willmes and Heinemann I see that they use an eccentricity (solidity) parameter to distinguish between leads and artifacts.

Clarified by adding the word “different” to describe the linear identification techniques.


28.   ... line 424: "Paul et al [40]..." is not a suitable reference here.

The reference is for the description of polynyas, not the detection technique. The sentence has been deleted and the source attributed to the previous sentence.


29.   ... line 426: "the thermal contrast conditions may not be sufficiently large near the edge of the sea ice or near the shore". I disagree. The shape and width tests in the algorithm are probably the cause for polynyas not being represented in the final product.

Agree, that part of the sentence has been deleted.


30.   P26: Remove Figure 6 (again)

Editorial error has been corrected.


 


Reviewer 2 Report

See the attached file

Comments for author File: Comments.pdf

Author Response

1.       The manuscript is significantly improved and most of my previous comments have been addressed. I very strongly suggest the authors remove figure 6 if it cannot be made to look more professional.

We believe it is important to show Figure 6; it illustrates a point in Section 2.1.1 and is also referenced in the discussion section. The color enhancement in Figure 6 is the same as in Figure 5, which we believe is important to emphasize the point that many features classified as potential leads are detected in only one overpass (red in Fig 5) but occur in regions with multiple cloud-free overpasses (not red in Fig 6). It is important to show the combination of the satellite overpass coverage and the pattern of cloud coverage, which represents the coverage area of our product. It is true that the patterns generated are not aesthetically pleasing to look at (an artifact of the satellite coverage patterns), but we still believe the results are important to show.


2.       The authors did include a table comparing their method to the method of Wilmes and Heinemann, but there is really only one sentence discussing this table, and some of the columns (the three right columns) are not explained in either the text or the table caption. The authors should clearly explain each column of the table.

A couple of sentences have been added to further describe Table 2.


Minor comments

3.       line 45, need comma after ‘LandSat’

Comma added.


4.       lines 43-53 - I don’t think the reference to the ice concentration algorithm from Drüe and Heinemann should be included because it deals with ice concentration, which is different from lead detection.

This reference is included because leads can be identifiable in ice concentration products as areas of lower ice concentration. We investigated using sea ice concentration as a basis for potential lead detection; a sentence about this has been added to the discussion section.


5.       line 66 - may want to add that Murashkin et al. used a random forest classifier

Changed to “Their lead classification algorithm uses a random forest classifier based on polarimetric features and textural features.”


6.       line 84 - extra ‘the’

Deleted


7.       Table 1 may be more appropriate in an Appendix

The color code in Table 1 is referred to in other figures in the manuscript, and we therefore decided to keep it as Table 1 rather than move it to an appendix.


8.       line 232 - green circle part of the sentence should be in brackets

The sentence contained an extra word, “circle”; the word has been deleted and the sentence is now clearer.


9.       line 240 should be ‘lead has been detected. For example, the transform....’

Compound sentence split into two sentences.


10.   lines 369-370 - a stand alone sentence should not be a paragraph

Paragraph has been edited for clarity.


11.   lines 457-461 - a stand alone sentence should not be a paragraph

We do not find any paragraphs with only one sentence in the manuscript.
