Article
Peer-Review Record

Detection of Waste Plastics in the Environment: Application of Copernicus Earth Observation Data

Remote Sens. 2022, 14(19), 4772; https://doi.org/10.3390/rs14194772
by Samantha Lavender
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 1 July 2022 / Revised: 11 August 2022 / Accepted: 15 September 2022 / Published: 23 September 2022
(This article belongs to the Special Issue Remote Sensing of Plastic Pollution)

Round 1

Reviewer 1 Report

Dear author

I have read with interest your paper on “Detection of Waste Plastics in the Environment: Application of Copernicus Earth Observation Data”, which addresses a crucial and growing environmental issue. The main objective of your paper is the development of a method for automatically detecting a variety of plastic waste across aquatic and terrestrial environments using Sentinel images. The method has been tested at five test sites.

The introduction is well written, illustrating the need to develop new techniques to monitor plastic litter.

The method is also well presented, consisting of different steps: pre-processing, calculation of thematic indices, improved shadow masking, neural network classification and reclassification. In addition, the correctness of the classification has been assessed using specific factors.

I would suggest that the author include a flow diagram to clearly illustrate the different operational phases, and perhaps add information regarding the manual digitization and also regarding the resolution. What are the dimensions of the objects that can be “correctly identified”?

Compared to the methods section, the results section is short and focuses on 1) Accuracy Statistics of the Training Process and 2) Application to the Test Sites. Perhaps some additional results could be added, for instance data before the reclassification.

Finally, the discussion needs to be improved, maybe by discussing the advantages of this method compared to similar remote sensing methods, or the drawbacks of such a method (dimensions of the plastic objects identified).

Best regards.

Author Response

>I would suggest that the author include a flow diagram to clearly illustrate the different operational phases, and perhaps add information regarding the manual digitization and also regarding the resolution. What are the dimensions of the objects that can be "correctly identified"?

  • A flow diagram has been added as Figure 13 within the Discussion.
  • Examples of digitizing have been added as Figure 3, with the further explanation given.
  • Discussion on the size of accumulations that can be detected is in the last paragraph of the Discussion – lines 572 to 574.

>Compared to the methods section, the results section is short and focuses on 1) Accuracy Statistics of the Training Process and 2) Application to the Test Sites. Perhaps some additional results could be added, for instance data before the reclassification.

More information has been added to the Results through a section on the impact of the Decision Tree alongside further investigation of the Accuracy Assessment.

>Finally, the discussion needs to be improved, maybe by discussing the advantages of this method compared to similar remote sensing methods, or the drawbacks of such a method (dimensions of the plastic objects identified).

Both the Discussion and Conclusions sections have been improved. The Discussion now reviews the broader/supporting activities, including references, while the Conclusions have been formulated into a list.

Reviewer 2 Report

The idea is interesting and the manuscript is well written. However, unfortunately, I have not found any novelty, nor could I find any contribution to the field.

Some comments, which I hope you will follow to make some modifications to your manuscript for the future, are:

1) The contribution of this study is not clear. If the contribution is the synergy of Sentinel-2 and Sentinel-1 satellites, then the author has to show the results from this synergy. This synergy is not shown at all.

2) lines 107-113: This belongs in the methodology section.

3) line 236: the sentence lacks a verb

4) line 285: How was the digitization performed on both Sentinel-1 and Sentinel-2 data? Perhaps the author can show an example with annotations in Sentinel-2 and Sentinel-1.

5) line 291: What is the total number of satellite images utilized for the training/validation dataset? Perhaps the author can include a table that describes the utilized data/S2 tiles.

6) line 301: The author may explain in more detail how/why these classes were selected.

7) line 367: the legend is missing

8) line 367: The author may show an example in an aquatic site.

9) line 428: The evaluation of the methodology is performed only qualitatively. Plastics detection is a very challenging task. Many difficulties have already been identified by the research community. Additionally, Table 5 shows unbelievably high scores. I am not sure if this model can be generalized. Perhaps this has to do with the way the split was done. To avoid overfitting, the author has to keep whole scenes in the same dataset (training or validation).

10) lines 476-477: There is a strong claim here which is not supported at all. How does Sentinel-1 improve the overall result? Finally, is this “synergy” of the two satellites helpful for plastics detection? The author has to show classification results with Sentinel-1, and the same results without Sentinel-1, in order to compare them and evaluate this synergy.

11) Last but not least, the discussion section is too short, and the challenges and insights are not discussed at all. There is a long introduction about the plastic pollution issue, much of which is already well known. The related work is insufficiently described. Several machine learning algorithms for plastics detection on Sentinel-2 data have already been proposed, but they are not mentioned.

Author Response

>The idea is interesting and the manuscript is well written. However, unfortunately, I have not found any novelty, nor could I find any contribution to the field.

I am sorry that the preparation of the paper has not been clear enough for the reviewer to see the novelty or contribution to the field. I have tried to make this clearer by expanding the discussion/conclusion.

Some comments, which I hope you will follow to make some modifications to your manuscript for the future, are:

1) The contribution of this study is not clear. If the contribution is the synergy of Sentinel-2 and Sentinel-1 satellites, then the author has to show the results from this synergy. This synergy is not shown at all.

The usage of Sentinel-1 is not the primary contribution of the paper, as it was already a conclusion of the previous paper, Page et al. [21], and has also been the conclusion of other papers – examples have been added to the Conclusions section. The synergy has been further demonstrated by adding Figure 3, which presents the Sentinel-1 SAR backscatter alongside a Sentinel-2 pseudo-true colour composite to show how the presence of plastics influences the backscatter. Also, the Discussion now brings the supporting information together.

2) lines 107-113: This belongs in the methodology section.

The text has been moved: the explanation of ANNs now opens Section 2.2.4, and the sentence on Sentinel-1/-2 usage is at lines 301 to 303.

3) line 236: the sentence lacks a verb

Agreed and updated.

4) line 285: How was the digitization performed on both Sentinel-1 and Sentinel-2 data? Perhaps the author can show an example with annotations in Sentinel-2 and Sentinel-1.

It is now shown in Figure 3 and the associated text.

5) line 291: What is the total number of satellite images utilized for the training/validation dataset? Perhaps the author can include a table that describes the utilized data/S2 tiles.

The total training/validation dataset has been provided in Table A1 (Appendix).

6) line 301: The author may explain in more detail how/why these classes were selected.

The decision on which classes had their values decreased versus increased was based on their original percentages; the adjustment was applied according to whether a class's share was greater or less than 5.6%, which equates to the share each class would have if the total number of pixels were divided equally among the classes.
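To illustrate how that rule works, a minimal sketch is given below; the class names, pixel counts and the increase/decrease direction (over-represented classes reduced, under-represented classes boosted) are illustrative assumptions rather than the paper's actual configuration.

```python
# Illustrative sketch of the class-balancing rule described above.
# The threshold is the share each class would hold if pixels were split evenly;
# classes above it are marked for a decrease and those below it for an increase.
# Class names, counts and the direction of adjustment are hypothetical.

pixel_counts = {
    "plastics": 1200,
    "open_water": 54000,
    "vegetation": 38000,
    "bare_soil": 21000,
    # ... remaining classes
}

total_pixels = sum(pixel_counts.values())
threshold = 1.0 / len(pixel_counts)  # equal share per class; 5.6% implies about 18 classes

for name, count in pixel_counts.items():
    share = count / total_pixels
    action = "decrease" if share > threshold else "increase"
    print(f"{name}: {share:.1%} of pixels -> {action}")
```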

7) line 367: the legend is missing

A legend has been added.

8) line 367: The author may show an example in an aquatic site.

To prevent all open water pixels from being flagged as shadow, the ISI layer was adjusted using NDWI2. As a result, the shadow mask is not triggered over water; this was not foreseen as an issue because the misclassification of plastic due to shadow is not encountered over water. I have clarified this in the paper.
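As an illustration of that adjustment (not the paper's implementation), the sketch below combines a shadow-candidate mask from the ISI layer with an NDWI2 water test so that open water is never flagged as shadow; the function name and the threshold values are placeholder assumptions.

```python
import numpy as np

# Minimal sketch, assuming ISI and NDWI2 are co-registered 2-D index rasters.
# The threshold values below are illustrative placeholders, not the tuned
# parameters used in the paper.
def shadow_mask(isi: np.ndarray, ndwi2: np.ndarray,
                isi_threshold: float = 0.3,
                water_threshold: float = 0.0) -> np.ndarray:
    candidate_shadow = isi > isi_threshold   # raw shadow candidates from the ISI layer
    open_water = ndwi2 > water_threshold     # pixels that look like open water
    return candidate_shadow & ~open_water    # water pixels are never flagged as shadow
```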

9) line 428: The evaluation of the methodology is performed only qualitatively. Plastics detection is a very challenging task. Many difficulties have already been identified by the research community. Additionally, Table 5 shows unbelievably high scores. I am not sure if this model can be generalized. Perhaps this has to do with the way the split was done. To avoid overfitting, the author has to keep whole scenes in the same dataset (training or validation).

In training, a non-random approach was chosen for splitting the training and validation data, so the same data are never used for both training and validation. However, the high values in Table 5 were investigated, and they are influenced by the limited number of images used for the validation. Therefore, the section has been expanded to include Table 6, where an additional five sites (shown in Table A1) were included in the accuracy assessment, creating a more diverse validation dataset.
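A minimal sketch of such a scene-level (non-random) split is shown below; the function name, scene identifiers and the sample representation are hypothetical, and the listing is only meant to illustrate keeping whole scenes in a single subset.

```python
# Sketch of a scene-level split: every sample from a given scene stays in the
# same subset, so training and validation never share imagery.
# Scene identifiers and the sample structure are hypothetical.

def split_by_scene(samples, validation_scenes):
    """samples: iterable of (scene_id, features, label) tuples."""
    train, valid = [], []
    for scene_id, features, label in samples:
        subset = valid if scene_id in validation_scenes else train
        subset.append((features, label))
    return train, valid

# Example usage with placeholder scene IDs:
# train, valid = split_by_scene(samples, validation_scenes={"scene_A", "scene_B"})
```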

10) lines 476-477: There is a strong claim here which is not supported at all. How does Sentinel-1 improve the overall result? Finally, is this "synergy" of the two satellites helpful for plastics detection? The author has to show classification results with Sentinel-1, and the same results without Sentinel-1, in order to compare them and evaluate this synergy.

This feedback is addressed under point 1 above. The model trained with Sentinel-1 cannot be run without it because it is an expected input layer; where it is missing, the classification is not performed. Training another model without the Sentinel-1 data as an input would produce a different model.

11) Last but not least, the discussion section is too short, and the challenges and insights are not discussed at all. There is a long introduction about the plastic pollution issue, much of which is already well known. The related work is insufficiently described. Several machine learning algorithms for plastics detection on Sentinel-2 data have already been proposed, but they are not mentioned.

Both the Discussion and Conclusions have been improved. The Discussion now reviews the broader/supporting activities, including references, while the Conclusions have been formulated into a list.

Reviewer 3 Report

The author describes the use of European Copernicus EO data for the classification of plastic debris in different environments. I enjoyed reading this manuscript, and I recommend it be accepted, though after some revision, which I would call minor.

My suggestions for this revision are:

- In general, please make sure that all abbreviations are first spelled out, even if they are well known to the scientific community: NIR, SWIR, ANN.

- l94: temperature differences are usually given in K.

- l227: which was used here, NDWI or NDWI2? The latter was introduced just before.

- Fig.2: I found it difficult to identify the aquatic sites, which should be shown in light blue. Any chance to improve their appearance? Why were those sites chosen, and by whom?

- The text in most figures is too small, worst in Fig.5!

- Fig.4: better to move the legend somewhere else, allowing the reader to identify the hidden curves.

- l356: again, NDWI or NDWI2?

- Fig.7: please add a legend indicating the meaning of the different colors.

- Table 5: the accuracies for Murrum Soil, Tyres, and Greenhouses are impressively high. Suggest that this be discussed some more (see below).

- Fig.8: suggest using colors, as the grey coding only allows distinguishing between "above 1000" and "below 1000".

- Fig.9&10: are there any maps available showing the ground truth?

- l438f: predominantly

- Discussion: this is rather short and would benefit from some qualitative and quantitative (see earlier comment on accuracies) discussion of the obtained results, plus limitations of the proposed method(s). Maybe you could also elaborate some more on the benefit of the radar data, which is not obvious from the results presented so far.

- Conclusions: even shorter, just three sentences! Any chance to add more concluding remarks?

 

Author Response

My suggestions for this revision are:

- In general, please make sure that all abbreviations are first spelled out, even if they are well known to the scientific community: NIR, SWIR, ANN.

I have checked these abbreviations, and all are expanded in the text before their usage.

- l94: temperature differences are usually given in K.

Agreed, changed

- l227: which was used here, NDWI or NDWI2? The latter was introduced just before.

Fisser proposed NDWI, but NDWI2 was used in this implementation, so a comment has been added.

- Fig.2: I found it difficult to identify the aquatic sites, which should be shown in light blue. Any chance to improve their appearance? Why were those sites chosen, and by whom?

The colour of the aquatic sites has been changed, so the markers are more visible. As now included in the paper, the test sites were accumulated over several years by reviewing peer-reviewed papers, reports and news articles on plastic waste and its detection using remote sensing. Not all sites initially identified by this literature search have been included: for some, the plastic waste accumulation was too small for the location to be clearly identified, or the site was obscured by cloud on the limited dates when plastic accumulation was known to be present.

- The text in most figures is too small, worst in Fig.5!

For several figures, including Figure 5, the plots have been transposed so they are shown top/bottom rather than left/right, allowing them to be larger.

- Fig.4: better to move the legend somewhere else, allowing the reader to identify the hidden curves.

The legend has been moved.

- l356: again, NDWI or NDWI2?

Yes, a typo; it should be NDWI2.

- Fig.7: please add a legend indicating the meaning of the different colors.

Legend added.

- Table 5: the accuracies for Murrum Soil, Tyres, and Greenhouses are impressively high. Suggest that this be discussed some more (see below).

Table 5 has been updated and the explanation improved.

- Fig.8: suggest using colors, as the grey coding only allows distinguishing between "above 1000" and "below 1000".

Colours have been added to what is now Figure 10.

- Fig.9&10: are there any maps available showing the ground truth?

Figure 5 shows the digitization for the Višegrad Dam and Kuwait tyre sites. The "ground truth" is the visual identification of plastic pixels through a combination of very high resolution imagery and the Sentinel-2 data, as it has not been possible to visit the sites.

- l438f: predominantly

Changed

- Discussion: this is rather short and would benefit from some qualitative and quantitative (see earlier comment on accuracies) discussion of the obtained results, plus limitations of the proposed method(s). Maybe you could also elaborate some more on the benefit of the radar data, which is not obvious from the results presented so far.

- Conclusions: even shorter, just three sentences! Any chance to add more concluding remarks?

Both the Discussion and Conclusions have been improved. The discussion now reviews the broader/supporting activities, including references, while the conclusions have been formulated into a list.

Round 2

Reviewer 2 Report

The author addressed all raised concerns and enhanced the manuscript by adding extra results, information and discussion. I think that the paper contributes to the body of knowledge.
