Peer-Review Record

KappaMask: AI-Based Cloudmask Processor for Sentinel-2

Remote Sens. 2021, 13(20), 4100; https://doi.org/10.3390/rs13204100
by Marharyta Domnich 1,2,*, Indrek Sünter 1, Heido Trofimov 1, Olga Wold 1, Fariha Harun 1, Anton Kostiukhin 1, Mihkel Järveoja 1, Mihkel Veske 1, Tanel Tamm 1, Kaupo Voormansik 1,3, Aire Olesk 3, Valentina Boccia 4, Nicolas Longepe 4 and Enrico Giuseppe Cadau 4
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 19 August 2021 / Revised: 3 October 2021 / Accepted: 8 October 2021 / Published: 13 October 2021
(This article belongs to the Special Issue Computer Vision and Deep Learning for Remote Sensing Applications)

Round 1

Reviewer 1 Report

  • The authors presented how KappaMask provides an accurate 10 m classification map for Sentinel-2 Level-2A and Level-1C products. It has been trained on an open-source dataset and fine-tuned on a Northern European terrestrial dataset that was labelled manually using an active learning methodology. However, no unique contribution is claimed as the paper's originality.
  • The RELATED WORKS and ORIGINALITY are not sufficient; they need further discussion.
  • I still cannot see any novel (proposed) idea in this work; the combination of KappaMask and U-Net is not enough.
  • This paper is more of an application than a scientific contribution.
  • The technical discussion is clearly described.

Author Response

Dear Reviewer,

Thank you for your comments. We have added more discussion on the active learning methodology and on 10 m resolution cloud mask datasets, and provided a discussion of the novelty of our work (L112-L150).

Reviewer 2 Report

Many thanks to the authors; the article describes the conducted work in a clear style and order. It was a pleasure to discover the text proposed for review.
In general, the text is of high quality and should be published in the Remote Sensing journal without any doubt.
By way of criticism, I can propose the following small corrections (the authors are fully free to decide whether these corrections are needed or not):
- Page 3 row 132, page 4 row 147, page 5 row 201: the authors apply machine learning terminology and denote the bands of the satellite image as features, but no comment is given on this use of terminology, which can slightly confuse readers who are not familiar with ML terminology. It would probably be better to denote the bands as bands in this section, or to add a comment such as "in machine learning terms, image bands will be referred to as features";
- Page 4 row 143: the abbreviation GSD (ground sample distance) is used without being spelled out, while the term spatial resolution is used before and after it. It would probably be better to harmonize the terminology and/or to spell out the abbreviation if it is needed;
- Figure 2: a cloud classification is applied to the tiles used as the training dataset, but nowhere in the text is this classification used or commented on. It would probably be better to apply some other classification here (a cloud cover percentage classification or something else), or to explain what this one is used for;
- Page 7 row 263, page 8 row 277: the terms patch and sub-tile are used, while earlier in the text the term tile is used for the same purpose. It would probably be better to harmonize the terminology.

Author Response

Dear Reviewer,

Many thanks for your feedback! We are glad you enjoyed reading it!
Thank you for the suggested points; we took them all into consideration:

Point 1: Page 3 row 132, page 4 row 147, page 5 row 201: the authors apply machine learning terminology and denote the bands of the satellite image as features, but no comment is given on this use of terminology, which can slightly confuse readers who are not familiar with ML terminology. It would probably be better to denote the bands as bands in this section, or to add a comment such as "in machine learning terms, image bands will be referred to as features".

Response 1: A sentence clarifying that bands are referred to as features in machine learning terminology has been added in L176-L177 (page 4).
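
To illustrate the bands-as-features convention for readers less familiar with ML terminology, a minimal sketch follows. It is not taken from the paper's code; the band names, array shape, and random data are purely illustrative assumptions.

```python
import numpy as np

# Purely illustrative: three Sentinel-2 band rasters of equal shape (H, W),
# filled with random values in place of real reflectances.
b02 = np.random.rand(512, 512)  # blue
b03 = np.random.rand(512, 512)  # green
b04 = np.random.rand(512, 512)  # red

# In machine learning terms each band is one input feature (channel):
# stacking along the last axis gives the (H, W, n_features) layout that
# convolutional segmentation networks such as U-Net typically consume.
features = np.stack([b02, b03, b04], axis=-1)
print(features.shape)  # (512, 512, 3)
```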

Point 2: Page 4 row 143: the abbreviation GSD (ground sample distance) is used without being spelled out, while the term spatial resolution is used before and after it. It would probably be better to harmonize the terminology and/or to spell out the abbreviation if it is needed.

Response 2: The abbreviation has been removed, and the text has been harmonized by using the term spatial resolution.

Point 3: Figure 2: a cloud classification is applied to the tiles used as the training dataset, but nowhere in the text is this classification used or commented on. It would probably be better to apply some other classification here (a cloud cover percentage classification or something else), or to explain what this one is used for.

Response 3: We wanted the dataset to contain equal amounts of the main cloud types (stratus, cumulus, and cirrus) in order to have a good representation of the data domain. By choosing data carefully rather than randomly, we can work with less data in the end. We now justify the classification used in Figure 2 in the text.
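
As a minimal sketch of such careful, type-balanced selection (assuming each catalogued tile already carries a dominant cloud-type label), the snippet below picks an equal number of tiles per cloud type. The field names, tile identifiers, and per-type count are hypothetical and do not describe the authors' actual pipeline.

```python
import random
from collections import defaultdict

def balanced_selection(tiles, types=("stratus", "cumulus", "cirrus"),
                       per_type=10, seed=42):
    """Pick an equal number of tiles per dominant cloud type."""
    by_type = defaultdict(list)
    for tile in tiles:
        by_type[tile["cloud_type"]].append(tile)  # group tiles by type
    rng = random.Random(seed)
    selected = []
    for t in types:
        candidates = by_type.get(t, [])
        # Sample per_type tiles (or all of them if fewer are available).
        selected.extend(rng.sample(candidates, min(per_type, len(candidates))))
    return selected

# Hypothetical catalogue entries: a tile id plus its dominant cloud type.
catalogue = [
    {"id": "T35VLF_1", "cloud_type": "stratus"},
    {"id": "T35VLF_2", "cloud_type": "cumulus"},
    {"id": "T35VMF_1", "cloud_type": "cirrus"},
]
print([t["id"] for t in balanced_selection(catalogue, per_type=1)])
```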

Point 4: Page 7 row 263, page 8 row 277: the terms patch and sub-tile are used, while earlier in the text the term tile is used for the same purpose. It would probably be better to harmonize the terminology.

Response 4: The terminology has now been harmonized by using "sub-tile" instead of "patch".
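
To make the tile/sub-tile distinction concrete, here is a minimal sketch of cutting a full band stack into fixed-size sub-tiles. The 512 px sub-tile size and the skipping of partial edge sub-tiles are assumptions for illustration, not necessarily the paper's exact procedure.

```python
import numpy as np

def split_into_subtiles(band_stack, size=512):
    """Split an (H, W, C) array into non-overlapping size x size sub-tiles.
    Partial sub-tiles at the edges are skipped here for brevity; a real
    pipeline would pad or overlap them instead."""
    h, w = band_stack.shape[:2]
    subtiles = []
    for row in range(0, h - size + 1, size):
        for col in range(0, w - size + 1, size):
            subtiles.append(band_stack[row:row + size, col:col + size])
    return subtiles

# A 10 m Sentinel-2 tile is 10980 x 10980 px; 512 px sub-tiles give
# 21 x 21 = 441 full sub-tiles.
tile = np.zeros((10980, 10980, 3), dtype=np.float32)
print(len(split_into_subtiles(tile)))  # 441
```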

Reviewer 3 Report

The Introduction should be improved: some details needed for understanding are missing, and other parts of the manuscript should be presented more clearly. Some choices should be better motivated, and some choices in the data setup and experiment evaluations are questionable or not clearly explained. Since the proposed method is based on deep learning, at least one deep-learning-based cloud detection method that can produce cloud masks at 10 m resolution should also be included as a baseline for comparison.

Therefore, I recommend a major revision. Please see the attached PDF for detailed comments and suggestions on how to improve the manuscript.

Comments for author File: Comments.pdf

Author Response

Dear Reviewer,

Thanks a lot for your thoughtful criticism!
As the main points, we added a deep-learning-based cloud detection method to the evaluation comparison, rewrote the ending of the Introduction by adding new paragraphs and highlighting the novelty, restructured the Results section, and tried to add more clarity to the Methods section. Please see the attached PDF for point-by-point answers.

Author Response File: Author Response.pdf

Round 2

Reviewer 3 Report

Thank you to the authors for having addressed my comments thoroughly and in a satisfactory way. Only a few minor observations remain from my side.

Table 3: the best-performing method for Cloud shadow is Sen2Cor; 87% should be in bold instead of 82% from KappaMask L2A.
Table 7: the best-performing method for Precision on the Clear class is KappaMask L1C; 79% should be in bold instead of 75% for KappaMask L2A.

Captions of Tables 6, 7 and 8: remove Sen2Cor and MAJA, add DL_L8S2_UV.

DL_L8S2_UV has exactly the same Recall and Overall accuracy values for the Clear, Cloud and All classes (Tables 8 and 9). This is a bit suspicious and could be a copy-paste mistake; please double-check.

Reference list: several citations in the References section contain "et al."; please replace this with the full list of authors.

The low scores of MAJA (51% for clear and clouds) still puzzle me, and I am looking forward to the potential continuation of your research towards accuracy assessment on a random dataset with confidence intervals.

Author Response

Dear reviewer,

Thanks a lot for your observations and criticism; we were really happy to improve the quality of the publication.
Points:
1. The bold for the best-performing method in Table 3 and Table 7 is fixed.
2. The captions of Tables 6, 7, 8 and 9 are fixed.
3. There was indeed a copying issue with the accuracy tables (the recall numbers were transferred twice). Table 5 and Table 9 are fully updated. Thanks for pointing it out! (See the metrics sketch after this list.)
4. Reference list: the full list of authors is added instead of "et al.".
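
For reference, the sketch below shows how per-class precision and recall differ from overall accuracy when computed from classification masks, which is why identical recall and overall-accuracy columns across tables pointed to a copying error. The class codes and toy arrays are hypothetical, not the paper's evaluation code.

```python
import numpy as np

def per_class_metrics(y_true, y_pred, classes):
    """Per-class precision/recall and overall accuracy from label masks."""
    y_true, y_pred = np.ravel(y_true), np.ravel(y_pred)
    metrics = {}
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))  # true positives
        fp = np.sum((y_pred == c) & (y_true != c))  # false positives
        fn = np.sum((y_pred != c) & (y_true == c))  # false negatives
        metrics[c] = {
            "precision": tp / (tp + fp) if tp + fp else 0.0,
            "recall": tp / (tp + fn) if tp + fn else 0.0,
        }
    overall_accuracy = float(np.mean(y_true == y_pred))
    return metrics, overall_accuracy

# Hypothetical masks with classes 0 = clear, 1 = cloud.
truth = np.array([0, 0, 1, 1, 1, 0])
pred = np.array([0, 1, 1, 1, 0, 0])
print(per_class_metrics(truth, pred, classes=[0, 1]))
```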
